-
Schedulers are critical for optimal resource utilization in high-performance computing. Traditional methods for evaluating schedulers are limited to post-deployment analysis or to simulators that do not model the associated infrastructure. In this work, we present the first-of-its-kind integration of scheduling and digital twins in HPC. This enables what-if studies of how parameter configurations and scheduling decisions affect the physical assets, even before deployment or for changes that are not easily realizable in production. We (1) provide the first digital twin framework extended with scheduling capabilities, (2) integrate several top-tier HPC systems based on their publicly available datasets, and (3) implement extensions that integrate external scheduling simulators. Finally, we show how to (4) implement and evaluate incentive structures, as well as (5) evaluate machine-learning-based scheduling, within this novel digital-twin-based meta-framework for prototyping scheduling. Our work enables what-if scenarios for HPC systems to evaluate sustainability and the impact on the simulated system.
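A minimal sketch of the kind of what-if study such a scheduling-aware digital twin enables: a toy first-come-first-served scheduler replayed against a synthetic job trace under two hypothetical system configurations, with makespan and a crude energy estimate as the observed outcomes. All names, numbers, and the scheduler itself are illustrative assumptions, not the framework's actual API.

```python
# Toy what-if study: replay a job trace through a simple FCFS scheduler
# and compare outcomes under two system configurations. Illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    submit: float      # submission time (hours)
    nodes: int         # nodes requested
    runtime: float     # runtime (hours)

def fcfs_makespan_and_energy(jobs, total_nodes, watts_per_node):
    """Schedule jobs FCFS onto a homogeneous cluster; return (makespan, kWh)."""
    free_at = [0.0] * total_nodes          # time at which each node becomes free
    end_times, node_hours = [], 0.0
    for job in sorted(jobs, key=lambda j: j.submit):
        free_at.sort()
        # the job starts once it is submitted and enough nodes are free
        start = max(job.submit, free_at[job.nodes - 1])
        end = start + job.runtime
        for i in range(job.nodes):
            free_at[i] = end
        end_times.append(end)
        node_hours += job.nodes * job.runtime
    energy_kwh = node_hours * watts_per_node / 1000.0
    return max(end_times), energy_kwh

trace = [Job(0.0, 4, 2.0), Job(0.5, 8, 1.0), Job(1.0, 2, 4.0)]
for nodes in (8, 16):                       # "what if" the system had more nodes?
    span, kwh = fcfs_makespan_and_energy(trace, nodes, watts_per_node=500)
    print(f"{nodes} nodes: makespan {span:.1f} h, energy {kwh:.1f} kWh")
```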
-
Computing platforms that package multiple types of memory, each with their own performance characteristics, are quickly becoming mainstream. To operate efficiently, heterogeneous memory architectures require new data management solutions that are able to match the needs of each application with an appropriate type of memory. As the primary generators of memory usage, applications create a great deal of information that can be useful for guiding memory management, but the community still lacks tools to collect, organize, and leverage this information effectively. To address this gap, this work introduces a novel software framework that collects and analyzes object-level information to guide memory tiering. The framework includes tools to monitor the capacity and usage of individual data objects, routines that aggregate and convert this information into tier recommendations for the host platform, and mechanisms to enforce these recommendations according to user-selected policies. Moreover, the developed tools and techniques are fully automatic, work on standard Linux systems, and do not require modification or recompilation of existing software. Using this framework, this study evaluates and compares the impact of a variety of design choices for memory tiering, including different policies for prioritizing objects for the fast memory tier as well as the frequency and timing of migration events. The results, collected on a modern Intel platform with conventional DDR4 SDRAM as well as Intel Optane NVRAM, show that guiding data tiering with object-level information can enable significant performance and efficiency benefits compared with standard hardware- and software-directed data-tiering strategies for a diverse set of memory-intensive workloads.
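A hedged sketch of the kind of object-level tier recommendation the abstract describes: rank data objects by access intensity per byte and fill the fast tier up to its capacity. The data structures, field names, and threshold are illustrative assumptions, not the framework's actual interfaces.

```python
# Toy tier-recommendation pass: given per-object size and access counts
# (as a profiler might report them), choose which objects go to fast memory.
def recommend_tiers(objects, fast_capacity_bytes):
    """objects: list of dicts with 'name', 'size', 'accesses'.
    Returns {name: 'fast' | 'slow'}, packing the hottest bytes first."""
    ranked = sorted(objects, key=lambda o: o["accesses"] / o["size"], reverse=True)
    placement, used = {}, 0
    for obj in ranked:
        if used + obj["size"] <= fast_capacity_bytes:
            placement[obj["name"]] = "fast"
            used += obj["size"]
        else:
            placement[obj["name"]] = "slow"
    return placement

profile = [
    {"name": "lookup_table", "size": 64 << 20, "accesses": 9_000_000},
    {"name": "mesh",         "size": 2 << 30,  "accesses": 1_200_000},
    {"name": "checkpoint",   "size": 8 << 30,  "accesses": 40_000},
]
print(recommend_tiers(profile, fast_capacity_bytes=4 << 30))
```

In the study itself, recommendations like these are recomputed periodically, so the frequency and timing of migration events become policy knobs in their own right.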
-
The red hypergiant VY CMa is remarkable for its very visible record of high-mass-loss events observed over the range of wavelengths from the optical and infrared to the submillimeter region with the Atacama Large Millimeter/submillimeter Array (ALMA). The SW Clump or SW knots are unique in the ejecta of VY CMa. Except for the central star, they are the brightest sources of dusty infrared emission in its complex ejecta. In this paper we combine proper motions from Hubble Space Telescope images and infrared fluxes from 2 to 12 μm with the 12CO images from ALMA to determine their ages and mass estimates. The SW knots were ejected more than 200 yr ago, with an active period lasting about 30 yr, and with a total mass in the Clump > 2 × 10⁻² M⊙.
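For intuition about how proper motions translate into ejection ages, a knot's kinematic age is roughly its angular offset from the star divided by its proper motion. The short sketch below uses illustrative placeholder numbers, not the paper's measured values.

```python
# Back-of-the-envelope kinematic age and transverse velocity for an ejected knot.
# Input values are illustrative placeholders, not measurements from the paper.
offset_arcsec = 1.5          # angular separation of the knot from the star
pm_arcsec_per_yr = 0.006     # proper motion of the knot
distance_pc = 1200.0         # approximate distance to VY CMa

age_yr = offset_arcsec / pm_arcsec_per_yr             # time since ejection
v_trans_km_s = 4.74 * pm_arcsec_per_yr * distance_pc  # 4.74 converts arcsec/yr * pc to km/s
print(f"kinematic age ~ {age_yr:.0f} yr, transverse velocity ~ {v_trans_km_s:.0f} km/s")
```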
-
Researchers increasingly rely on aggregations of radiocarbon dates from archaeological sites as proxies for past human populations. This approach has been critiqued on several grounds, including the assumptions that material is deposited, preserved, and sampled in proportion to past population size. However, various attempts to quantitatively assess the approach suggest there may be some validity in assuming date counts reflect relative population size. To add to this conversation, here we conduct a preliminary analysis coupling estimates of ethnographic population density with late Holocene radiocarbon dates across all counties in California. Results show that counts of late Holocene radiocarbon-dated archaeological sites increase significantly as a function of ethnographic population density. This trend is robust across varying sampling windows over the last 5000 years, though the majority of variation in dated-site counts remains unexplained by population density. Outliers reveal how departures from the central trend may be influenced by regional differences in research traditions, development-driven contract work, organic preservation, and landscape taphonomy. Overall, this exercise provides some support for the "dates-as-data" approach and offers insights into the conditions where the underlying assumptions may or may not hold.
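A minimal sketch of the kind of county-level comparison described here: an ordinary least-squares fit of log dated-site counts against log ethnographic population density. The values are synthetic placeholders, not the study's data.

```python
# Toy "dates-as-data" check: do dated-site counts scale with population density?
# Synthetic county-level values; the real analysis uses California county data.
import numpy as np

pop_density = np.array([0.05, 0.1, 0.3, 0.5, 1.0, 2.0, 4.0])   # persons per km^2
dated_sites = np.array([2, 3, 6, 9, 14, 22, 35])                # radiocarbon-dated sites

slope, intercept = np.polyfit(np.log(pop_density), np.log(dated_sites), 1)
pred = slope * np.log(pop_density) + intercept
resid = np.log(dated_sites) - pred
r2 = 1 - np.sum(resid**2) / np.sum((np.log(dated_sites) - np.log(dated_sites).mean())**2)
print(f"log-log slope {slope:.2f}, R^2 {r2:.2f}")
```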
-
As scaling of conventional memory devices has stalled, many high-end computing systems have begun to incorporate alternative memory technologies to meet performance goals. Since these technologies present distinct advantages and tradeoffs compared to conventional DDR* SDRAM, such as higher bandwidth with lower capacity or vice versa, they are typically packaged alongside conventional SDRAM in a heterogeneous memory architecture. To utilize the different types of memory efficiently, new data management strategies are needed to match application usage to the best available memory technology. However, current proposals for managing heterogeneous memories are limited, because they either (1) do not consider high-level application behavior when assigning data to different types of memory or (2) require separate program execution (with a representative input) to collect information about how the application uses memory resources. This work presents a new data management toolset to address the limitations of existing approaches for managing complex memories. It extends the application runtime layer with automated monitoring and management routines that assign application data to the best tier of memory based on previous usage, without any need for source code modification or a separate profiling run. It evaluates this approach on a state-of-the-art server platform with both conventional DDR4 SDRAM and non-volatile Intel Optane DC memory, using memory-intensive high-performance computing (HPC) applications as well as standard benchmarks. Overall, the results show that this approach improves program performance significantly compared to a standard unguided approach across a variety of workloads and system configurations. The HPC applications exhibit the largest benefits, with speedups ranging from 1.4× to 7× in the best cases. Additionally, we show that this approach achieves performance similar to a comparable offline profiling-based approach after a short startup period, without requiring separate program execution or offline analysis steps.
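A hedged sketch of the online, profile-free idea the abstract describes: the runtime keeps a table keyed by allocation site, records how data from each site was used in the recent past, and uses that history to pick a tier for the next allocation from the same site. The class, names, and threshold are illustrative, not the toolset's actual interface.

```python
# Toy online tier chooser keyed by allocation site: the first allocation from a
# site defaults to the slow tier; later allocations reuse observed access rates.
class OnlineTierGuide:
    def __init__(self, hot_accesses_per_mib=1000.0):
        self.history = {}                     # site -> (accesses, bytes) seen so far
        self.threshold = hot_accesses_per_mib

    def choose_tier(self, site):
        accesses, size = self.history.get(site, (0, 0))
        if size == 0:
            return "slow"                     # no history yet: be conservative
        rate = accesses / (size / (1 << 20))  # accesses per MiB allocated
        return "fast" if rate >= self.threshold else "slow"

    def record(self, site, size_bytes, accesses):
        a, s = self.history.get(site, (0, 0))
        self.history[site] = (a + accesses, s + size_bytes)

guide = OnlineTierGuide()
print(guide.choose_tier("matrix_alloc"))          # first time: 'slow'
guide.record("matrix_alloc", 256 << 20, 5_000_000)
print(guide.choose_tier("matrix_alloc"))          # hot history: 'fast'
```

The short startup period mentioned in the abstract corresponds to the time before such a table accumulates enough history to steer allocations well.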
-
Many high-performance systems now include different types of memory devices within the same compute platform to meet strict performance and cost constraints. Such heterogeneous memory systems often include an upper-level tier with better performance, but limited capacity, and lower-level tiers with higher capacity, but less bandwidth and longer latencies for reads and writes. To utilize the different memory layers efficiently, current systems rely on hardware-directed, memory-side caching or they provide facilities in the operating system (OS) that allow applications to make their own data-tier assignments. Since these data management options each come with their own set of trade-offs, many systems also include mixed data management configurations that allow applications to employ hardware- and software-directed management simultaneously, but for different portions of their address space. Despite the opportunity to address limitations of stand-alone data management options, such mixed management modes are under-utilized in practice and have not been evaluated in prior studies of complex memory hardware. In this work, we develop custom program profiling, configurations, and policies to study the potential of mixed data management modes to outperform hardware- or software-based management schemes alone. Our experiments, conducted on an Intel® Knights Landing platform with high-bandwidth memory, demonstrate that the mixed data management mode achieves the same or better performance than the best stand-alone option for five memory-intensive benchmark applications (run separately and in isolation), resulting in an average speedup of over 10% compared to the best stand-alone policy.
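A minimal sketch of the mixed-mode idea: reserve part of the high-bandwidth memory for explicit, software-directed placement of the highest-bandwidth objects and leave the remainder for hardware-directed caching of everything else. The capacities, fields, and splitting heuristic are illustrative assumptions, not the policies evaluated in the paper.

```python
# Toy mixed-mode split for a platform with a small high-bandwidth memory (HBM):
# the most bandwidth-hungry objects get explicit HBM placement, the rest stay in
# DDR and are served by the hardware-managed cache portion of the HBM.
def mixed_mode_plan(objects, hbm_bytes, software_fraction=0.5):
    sw_budget = int(hbm_bytes * software_fraction)   # explicitly managed portion
    plan, used = {}, 0
    for obj in sorted(objects, key=lambda o: o["bandwidth_gbs"], reverse=True):
        if used + obj["size"] <= sw_budget:
            plan[obj["name"]] = "hbm (software-placed)"
            used += obj["size"]
        else:
            plan[obj["name"]] = "ddr (hardware-cached)"
    return plan

objs = [
    {"name": "stencil_grid", "size": 3 << 30, "bandwidth_gbs": 120},
    {"name": "particles",    "size": 6 << 30, "bandwidth_gbs": 40},
    {"name": "io_buffer",    "size": 1 << 30, "bandwidth_gbs": 2},
]
print(mixed_mode_plan(objs, hbm_bytes=16 << 30))
```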
-
The MMT Adaptive optics exoPlanet characterization System (MAPS) is an exoplanet characterization program that encompasses instrument development, observational science, and education. The instrument we are developing for the 6.5-m MMT observatory is multi-faceted, including a refurbished 336-actuator adaptive secondary mirror (ASM); two pyramid wavefront sensors (PyWFSs); a 1-kHz adaptive optics (AO) control loop; a high-resolution and long-wavelength upgrade to the Arizona infraRed Imager and Echelle Spectrograph (ARIES); and a new, AO-optimized upgrade to the MMT-sensitive polarimeter (MMT-Pol). With the completed MAPS instrument, we will execute a 60-night science program to characterize the atmospheric composition and dynamics of ~50-100 planets around other stars. The project is approaching first light, anticipated for summer/fall of 2022. With the electrical and optical tests complete and the review milestone for the ASM's development passed, the mirror is currently being tuned. The PyWFSs are being built and integrated in their respective labs: the visible-light PyWFS at the University of Arizona (UA) and the infrared PyWFS at the University of Toronto (UT). The top-level AO control software is being developed at UA, with an on-sky calibration algorithm being developed at UT. ARIES development continues at UA, and MMT-Pol development is at the University of Minnesota. The science and education programs are in planning and preparation. We will present the design and development of the entire MAPS instrument and project, including an overview of lab results and next steps.
-
We are upgrading and refurbishing the first-generation adaptive-secondary mirror (ASM)-based AO system on the 6.5-m MMT in Arizona, in an NSF MSIP-funded program that will create a unique facility specialized for exoplanet characterization. This upgrade includes a third-generation ASM with embedded electronics for low power consumption, two pyramid wavefront sensors (optical and near-IR), an upgraded ARIES science camera for high-resolution spectroscopy (HRS) from 1-5 μm, and the MMT-POL science camera for sensitive polarization mapping. Digital electronics have been incorporated into each of the 336 actuators, simplifying hub-level electronics and reducing the total power to 300 W, down from 1800 W in the legacy system, which reduces cooling requirements from active coolant to passive ambient cooling. An improved internal control law allows for electronic damping and a faster response. The dual pyramid wavefront sensors allow for a choice between optical or IR wavefront sensing depending on guide-star magnitude, color, and extinction. The HRS upgrade to ARIES enables cross-correlation of molecular templates to extract atmospheric parameters of exoplanets. The combination of these upgrades creates a workhorse instrument for exoplanet characterization via AO and HRS to separate planets from their host stars, with broad wavelength coverage and polarization to probe a range of molecular species in exoplanet atmospheres.
